
    Short-term climate response to a freshwater pulse in the Southern Ocean

    The short-term response of the climate system to a freshwater anomaly in the Southern Ocean is investigated using a coupled global climate model. As a result of the anomaly, ventilation of deep waters around Antarctica is inhibited, causing a warming of the deep ocean, and a cooling of the surface. The surface cooling causes Antarctic sea-ice to thicken and increase in extent, and this leads to a cooling of Southern Hemisphere surface air temperature. The surface cooling increases over the first 5 years, then remains constant over the next 5 years. There is a more rapid response in the Pacific Ocean, which transmits a signal to the Northern Hemisphere, ultimately causing a shift to the negative phase of the North Atlantic Oscillation in years 5–10

    On binary reflected Gray codes and functions

    Abstract: The binary reflected Gray code function b is defined as follows: if m is a nonnegative integer, then b(m) is the integer obtained when initial zeros are omitted from the binary reflected Gray code of m. This paper examines this Gray code function and its inverse and gives simple algorithms to generate both. It also simplifies Conder's result that the jth letter of the kth word of the binary reflected Gray code of length n is the binomial coefficient (2^n − 2^(n−j) − 1 choose ⌊2^n − 2^(n−j−1) − k/2⌋) mod 2, by replacing the binomial coefficient with ⌊(k−1)/2^(n−j+1) + 1/2⌋
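    A minimal Python sketch (not taken from the paper) of the standard bitwise construction behind these definitions: b(m) is m XOR (m >> 1), its inverse is a cumulative XOR of right shifts, and the simplified per-letter expression, as read from the abstract above, can be checked against the bits of b(k − 1). Function names are illustrative.

```python
def b(m: int) -> int:
    """Binary reflected Gray code of m, read as an integer
    (omitting leading zeros does not change its value)."""
    return m ^ (m >> 1)

def b_inverse(g: int) -> int:
    """Inverse of b: recover m from its Gray code g by a cumulative XOR of right shifts."""
    m = 0
    while g:
        m ^= g
        g >>= 1
    return m

def jth_letter(n: int, k: int, j: int) -> int:
    """jth letter (from the left) of the kth word of the length-n code,
    via the simplified expression floor((k - 1) / 2**(n - j + 1) + 1/2) mod 2."""
    return ((k - 1 + 2 ** (n - j)) // 2 ** (n - j + 1)) % 2

# Consistency checks on small inputs.
assert all(b_inverse(b(m)) == m for m in range(1 << 12))
n = 6
assert all(jth_letter(n, k, j) == (b(k - 1) >> (n - j)) & 1
           for k in range(1, 2 ** n + 1) for j in range(1, n + 1))
```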

    Multiple imputation with missing indicators as proxies for unmeasured variables: simulation study

    From Springer Nature via Jisc Publications Router. History: received 2019-11-18, accepted 2020-06-28, registration 2020-06-29, pub-electronic 2020-07-08, online 2020-07-08, collection 2020-12. Publication status: Published. Funder: Medical Research Council; doi: http://dx.doi.org/10.13039/501100000265; Grant(s): MR/T025085/1. Abstract: Background: Within routinely collected health data, missing data for an individual might provide useful information in itself. This occurs, for example, in the case of electronic health records, where the presence or absence of data is informative. While the naive use of missing indicators to try to exploit such information can introduce bias, their use in conjunction with multiple imputation may unlock the potential value of missingness to reduce bias in causal effect estimation, particularly in missing not at random scenarios and where missingness might be associated with unmeasured confounders. Methods: We conducted a simulation study to determine when the use of a missing indicator, combined with multiple imputation, would reduce bias for causal effect estimation, under a range of scenarios including unmeasured variables, missing not at random, and missing at random mechanisms. We use directed acyclic graphs and structural models to elucidate a variety of causal structures of interest. We handled missing data using complete case analysis, and multiple imputation with and without missing indicator terms. Results: We find that multiple imputation combined with a missing indicator gives minimal bias for causal effect estimation in most scenarios. In particular the approach: 1) does not introduce bias in missing (completely) at random scenarios; 2) reduces bias in missing not at random scenarios where the missing mechanism depends on the missing variable itself; and 3) may reduce or increase bias when unmeasured confounding is present. Conclusion: In the presence of missing data, careful use of missing indicators, combined with multiple imputation, can improve causal effect estimation when missingness is informative, and is not detrimental when missingness is at random
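    A hedged Python sketch of the general approach described (not the authors' simulation code): one incomplete covariate is multiply imputed by stochastic regression, a binary missing indicator is carried into the outcome model, and the exposure effect is pooled across imputations. The data-generating step, variable names, and the single-covariate imputer are illustrative stand-ins for a full MICE procedure.

```python
# Illustrative sketch: multiple imputation of one covariate combined with a
# missing-data indicator, with the exposure coefficient averaged over imputations.
import numpy as np

rng = np.random.default_rng(0)
n, M = 5_000, 20                      # sample size, number of imputations

# Hypothetical data: exposure A, covariate Z, outcome Y; Z is missing more
# often when Z itself is large (a missing-not-at-random mechanism).
Z = rng.normal(size=n)
A = rng.binomial(1, 1 / (1 + np.exp(-0.5 * Z)))
Y = 1.0 * A + 0.8 * Z + rng.normal(size=n)
missing = rng.random(n) < 1 / (1 + np.exp(-Z))   # P(missing) increases with Z
Z_obs = np.where(missing, np.nan, Z)
R = missing.astype(float)                        # missing indicator

def ols(X, y):
    """Least-squares fit returning coefficients and residual SD."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma = np.sqrt(np.mean((y - X @ beta) ** 2))
    return beta, sigma

cc = ~missing                                    # complete cases for Z
estimates = []
for _ in range(M):
    # Stochastic regression imputation of Z from (A, Y) among observed rows.
    beta, sigma = ols(np.column_stack([np.ones(cc.sum()), A[cc], Y[cc]]), Z_obs[cc])
    X_mis = np.column_stack([np.ones((~cc).sum()), A[~cc], Y[~cc]])
    Z_filled = Z_obs.copy()
    Z_filled[~cc] = X_mis @ beta + rng.normal(scale=sigma, size=(~cc).sum())

    # Outcome model includes the missing indicator R alongside the imputed Z.
    b_out, _ = ols(np.column_stack([np.ones(n), A, Z_filled, R]), Y)
    estimates.append(b_out[1])                   # coefficient on exposure A

print("complete-case estimate:",
      ols(np.column_stack([np.ones(cc.sum()), A[cc], Z_obs[cc]]), Y[cc])[0][1])
print("MI + indicator estimate (pooled mean):", np.mean(estimates))
```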

    Geographic Variation in the Structure of Kentucky’s Population Health Systems: An Urban, Rural, and Appalachian Comparison

    Introduction: Research examining geographic variation in the structure of population health systems is continuing to emerge, and most of the evidence that currently exists divides systems by urban and rural designation. Very little is understood about how being both rural and Appalachian affects population health system structure and strength. Purpose: This study examines geographic differences in key characteristics of population health systems in urban, rural non-Appalachian, and rural Appalachian regions of Kentucky. Methods: Data from a 2018 statewide survey of community networks were used to examine population health system characteristics. Descriptive statistics were generated to examine variation across geographic regions in the availability of 20 population health activities, the range of organizations that contribute to those activities, and system strength. Data were collected in 2018 and analyzed in 2020. Results: The provision of population health protections and the structure of public health systems vary across Kentucky. Urban communities are more likely than rural communities to have a comprehensive set of population health protections delivered in collaboration with a diverse set of multisector partners. Rural Appalachian communities face further limits on capacity to deliver population health activities compared to other rural communities in the state. Implications: Understanding the delivery of population health provides further insight into system-level factors that may drive persistent health inequities in rural and Appalachian communities. The capacity to improve health extends beyond the clinic, and strengthening population health systems will be a critical step in efforts to improve population health

    Summary of Results from the 2016 National Health Security Preparedness Index

    The National Health Security Preparedness Index tracks the nation’s progress in preparing for, responding to, and recovering from disasters and other large-scale emergencies that pose risks to health and well-being in the United States. Because health security is a responsibility shared by many different stakeholders in government and society, the Index combines measures from multiple sources and perspectives to offer a broad view of the health protections in place for the nation as a whole and for each U.S. state. The Index identifies strengths as well as gaps in the protections needed to keep people safe and healthy in the face of disasters, and it tracks how these protections vary across the U.S. and change over time. Results from the 2016 release of the Index, containing data from 2013 through 2015, reveal that preparedness is improving overall, but protections remain uneven across the U.S., and they are losing strength in some critical areas

    Summary of Proposed Updates to the National Health Security Preparedness Index for 2015-2016

    This report describes proposed updates in methodology and measures for the 2015-16 release of the National Health Security Preparedness Index

    Advanced cardiovascular risk prediction in the emergency department: updating a clinical prediction model - a large database study protocol.

    From Europe PMC via Jisc Publications Router. History: ppub 2021-10-01, epub 2021-10-07. Publication status: Published. Funder: Department of Health; Grant(s): NIHR300246. Funder: National Institute for Health Research; Grant(s): NIHR300246. Background: Patients presenting with chest pain represent a large proportion of attendances to emergency departments. In these patients clinicians often consider the diagnosis of acute myocardial infarction (AMI), the timely recognition and treatment of which is clinically important. Clinical prediction models (CPMs) have been used to enhance early diagnosis of AMI. The Troponin-only Manchester Acute Coronary Syndromes (T-MACS) decision aid is currently in clinical use across Greater Manchester. CPMs have been shown to deteriorate over time through calibration drift. We aim to assess potential calibration drift with T-MACS and compare methods for updating the model. Methods: We will use routinely collected electronic data from patients who were treated using T-MACS at two large NHS hospitals. This is estimated to include approximately 14,000 patient episodes spanning June 2016 to October 2020. The primary outcome of acute myocardial infarction will be sourced from NHS Digital's admitted patient care dataset. We will assess the calibration drift of the existing model and the benefit of updating the CPM by model recalibration, model extension and dynamic updating. These models will be validated by bootstrapping and one-step-ahead prequential testing. We will evaluate predictive performance using calibration plots and c-statistics. We will also examine the reclassification of predicted probability with the updated T-MACS model. Discussion: CPMs are widely used in modern medicine, but are vulnerable to deteriorating calibration over time. Ongoing refinement using routinely collected electronic data will inevitably be more efficient than deriving and validating new models. In this analysis we will seek to exemplify methods for updating CPMs to protect the initial investment of time and effort. If successful, the updating methods could be used to continually refine the algorithm used within T-MACS, maintaining or even improving predictive performance over time. Trial registration: ISRCTN number: ISRCTN41008456
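    A minimal Python sketch of one of the updating strategies mentioned, logistic recalibration of an existing model's linear predictor (updating the intercept and estimating a calibration slope). The inputs `lp_existing` and `y`, and the simulated drift at the end, are assumptions for illustration rather than the T-MACS model or data.

```python
# Hedged sketch: recalibrating an existing clinical prediction model by
# refitting the outcome on the original model's linear predictor.
import numpy as np
from sklearn.linear_model import LogisticRegression

def recalibrate(lp_existing: np.ndarray, y: np.ndarray):
    """Fit y ~ a + b * lp_existing; b is the calibration slope
    (b < 1 suggests the original model is over-optimistic or has drifted)."""
    lr = LogisticRegression(C=1e6)          # large C: effectively unpenalised
    lr.fit(lp_existing.reshape(-1, 1), y)
    return lr.intercept_[0], lr.coef_[0, 0]

def updated_risk(lp_existing: np.ndarray, intercept: float, slope: float):
    """Apply the recalibrated intercept and slope to obtain updated risks."""
    lp_new = intercept + slope * lp_existing
    return 1 / (1 + np.exp(-lp_new))

# Toy demonstration with simulated drift: outcomes are generated from a
# shifted intercept and a shrunken slope relative to the original model.
rng = np.random.default_rng(1)
lp = rng.normal(0, 1.5, size=20_000)                        # original linear predictor
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.4 + 0.7 * lp))))   # drifted outcome process
a, b = recalibrate(lp, y)
print(f"recalibration intercept {a:.2f}, slope {b:.2f}")    # roughly -0.4 and 0.7
```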

    Developing clinical prediction models when adhering to minimum sample size recommendations: The importance of quantifying bootstrap variability in tuning parameters and predictive performance

    From SAGE Publishing via Jisc Publications Router. History: epub 2021-10-08. Publication status: Published. Funder: Medical Research Council; FundRef: https://doi.org/10.13039/501100000265; Grant(s): MR/T025085/1. Funder: NIHR Biomedical Research Centre, Oxford, and Cancer Research UK; Grant(s): C49297/A27294. Recent minimum sample size formulae (Riley et al.) for developing clinical prediction models help ensure that development datasets are of sufficient size to minimise overfitting. While these criteria are known to avoid excessive overfitting on average, the extent of variability in overfitting at recommended sample sizes is unknown. We investigated this through a simulation study and an empirical example, developing logistic regression clinical prediction models using unpenalised maximum likelihood estimation and various post-estimation shrinkage or penalisation methods. While the mean calibration slope was close to the ideal value of one for all methods, penalisation further reduced the level of overfitting, on average, compared to unpenalised methods. This came at the cost of higher variability in predictive performance for penalisation methods in external data. We recommend that penalisation methods are used in data that meet, or surpass, minimum sample size requirements to further mitigate overfitting, and that the variability in predictive performance and any tuning parameters should always be examined as part of the model development process, since this provides additional information over average (optimism-adjusted) performance alone. Lower variability would give reassurance that the developed clinical prediction model will perform well in new individuals from the same population as was used for model development
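    An illustrative Python sketch of the general point (assumed simulated data and ridge-penalised logistic regression, not the paper's actual simulation design): across bootstrap resamples of a modest development dataset, record both the selected tuning parameter and the calibration slope in external data, and report their spread rather than only their mean.

```python
# Hedged sketch: bootstrap variability in the tuning parameter and in external
# calibration when developing a penalised logistic regression model.
import numpy as np
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV

rng = np.random.default_rng(2)
p, n_dev, n_val = 10, 500, 20_000          # n_dev stands in for a "minimum" sample size
beta = rng.normal(0, 0.4, size=p)

def simulate(n):
    X = rng.normal(size=(n, p))
    y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + X @ beta))))
    return X, y

def calibration_slope(lp, y):
    """Slope from refitting y ~ lp in the validation data (ideal value: 1)."""
    lr = LogisticRegression(C=1e6).fit(lp.reshape(-1, 1), y)
    return lr.coef_[0, 0]

X_dev, y_dev = simulate(n_dev)
X_val, y_val = simulate(n_val)

chosen_C, slopes = [], []
for _ in range(200):                        # bootstrap resamples of the development data
    idx = rng.integers(0, n_dev, n_dev)
    model = LogisticRegressionCV(Cs=10, cv=5, penalty="l2").fit(X_dev[idx], y_dev[idx])
    chosen_C.append(model.C_[0])            # tuning parameter selected in this resample
    slopes.append(calibration_slope(model.decision_function(X_val), y_val))

print("tuning parameter C: 5th-95th percentile",
      np.percentile(chosen_C, [5, 95]).round(3))
print("calibration slope:  mean %.2f, 5th-95th percentile %s"
      % (np.mean(slopes), np.percentile(slopes, [5, 95]).round(2)))
```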

    Impact of sample size on the stability of risk scores from clinical prediction models: a case study in cardiovascular disease

    From Springer Nature via Jisc Publications Router. History: received 2020-02-25, accepted 2020-08-12, registration 2020-08-13, pub-electronic 2020-09-09, online 2020-09-09, collection 2020-12. Publication status: Published. Funder: Medical Research Council; doi: http://dx.doi.org/10.13039/501100000265; Grant(s): MR/N013751/1. Abstract: Background: Stability of risk estimates from prediction models may be highly dependent on the sample size of the dataset available for model derivation. In this paper, we evaluate the stability of cardiovascular disease risk scores for individual patients when using different sample sizes for model derivation; these include sample sizes similar to those used to develop models recommended in national guidelines, and sample sizes based on recently published formulae for prediction models. Methods: We mimicked the process of sampling N patients from a population to develop a risk prediction model by sampling patients from the Clinical Practice Research Datalink. A cardiovascular disease risk prediction model was developed on this sample and used to generate risk scores for an independent cohort of patients. This process was repeated 1000 times, giving a distribution of risks for each patient. N = 100,000, 50,000, 10,000, Nmin (derived from the sample size formula) and Nepv10 (meeting the 10-events-per-predictor rule) were considered. The 5th-95th percentile range of risks across these models was used to evaluate instability. Patients were grouped by a risk derived from a model developed on the entire population (population-derived risk) to summarise results. Results: For a sample size of 100,000, the median 5th-95th percentile range of risks for patients across the 1000 models was 0.77%, 1.60%, 2.42% and 3.22% for patients with population-derived risks of 4-5%, 9-10%, 14-15% and 19-20% respectively; for N = 10,000, it was 2.49%, 5.23%, 7.92% and 10.59%, and for N using the formula-derived sample size, it was 6.79%, 14.41%, 21.89% and 29.21%. Restricting this analysis to models with high discrimination, good calibration or small mean absolute prediction error reduced the percentile range, but high levels of instability remained. Conclusions: Widely used cardiovascular disease risk prediction models suffer from high levels of instability induced by sampling variation. Many models will also suffer from overfitting (a closely linked concept), but even at acceptable levels of overfitting, there may still be high levels of instability in individual risk. Stability of risk estimates should be a criterion when determining the minimum sample size needed to develop models
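    A hedged Python sketch of this stability check, using simulated data in place of CPRD and a small logistic model in place of a full cardiovascular risk model: repeatedly sample N patients, refit the model, score an independent cohort, and summarise each patient's 5th-95th percentile risk range. The sample sizes and model are illustrative, not those of the study.

```python
# Illustrative sketch: per-patient instability of predicted risks across
# prediction models developed on repeated samples of size N.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
p, n_pop, n_val, n_reps = 8, 200_000, 2_000, 100
beta = rng.normal(0, 0.3, size=p)

def simulate(n):
    X = rng.normal(size=(n, p))
    y = rng.binomial(1, 1 / (1 + np.exp(-(-2.2 + X @ beta))))   # ~10% event rate
    return X, y

X_pop, y_pop = simulate(n_pop)        # stands in for the source population
X_val, _ = simulate(n_val)            # independent cohort to score

def risks_from_sample(N):
    idx = rng.integers(0, n_pop, N)   # draw a development dataset of size N
    model = LogisticRegression(C=1e6).fit(X_pop[idx], y_pop[idx])
    return model.predict_proba(X_val)[:, 1]

for N in (100_000, 10_000, 1_000):    # 1,000 plays the role of a formula-derived minimum
    risk_matrix = np.vstack([risks_from_sample(N) for _ in range(n_reps)])
    spread = np.percentile(risk_matrix, 95, axis=0) - np.percentile(risk_matrix, 5, axis=0)
    print(f"N={N:>7,}: median per-patient 5th-95th risk range = {np.median(spread):.3%}")
```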